Tags: llm* + question answering*


  1. The article introduces LongRAG, a framework that improves Retrieval-Augmented Generation (RAG) by pairing a "long retriever" with a "long reader". LongRAG groups Wikipedia into ~4K-token retrieval units, shrinking the corpus from 22M passages to 600K units and easing the burden on the retriever. The top-k retrieved units (≈30K tokens in total) are then fed to a long-context language model, which extracts the answer zero-shot. LongRAG achieves 62.7% EM on NQ and 64.3% EM on HotpotQA (full-wiki), on par with state-of-the-art models. A sketch of the pipeline follows below.
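
A minimal sketch of the retrieve-then-read flow described above, under stated assumptions: a toy word-overlap scorer stands in for the paper's dense retriever, the 4K-token grouping and top-k values are illustrative, and `long_context_llm` is a hypothetical callable for the long-context reader; this is not the authors' implementation.

```python
def group_into_units(passages: list[str], max_tokens: int = 4000) -> list[str]:
    """Greedily pack short passages into ~4K-token retrieval units,
    shrinking the corpus the retriever must search (22M -> 600K in the paper)."""
    units, buf, size = [], [], 0
    for p in passages:
        n = len(p.split())  # crude whitespace token count, for illustration only
        if buf and size + n > max_tokens:
            units.append("\n".join(buf))
            buf, size = [], 0
        buf.append(p)
        size += n
    if buf:
        units.append("\n".join(buf))
    return units

def retrieve(question: str, units: list[str], top_k: int = 8) -> list[str]:
    """Rank units by a toy word-overlap score; the paper uses a dense retriever."""
    q_words = set(question.lower().split())
    score = lambda u: len(q_words & set(u.lower().split()))
    return sorted(units, key=score, reverse=True)[:top_k]

def answer(question: str, passages: list[str], long_context_llm) -> str:
    """Concatenate the ~30K retrieved tokens and ask the long-context
    reader to extract the answer zero-shot (no reader fine-tuning)."""
    context = "\n\n".join(retrieve(question, group_into_units(passages)))
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    return long_context_llm(prompt)
```

The design point the sketch illustrates is the trade-off LongRAG makes: fewer, longer retrieval units make retrieval easier (coarser targets), and the hard work of locating the exact answer span is shifted to the long-context reader.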
